TECHnalysis Research Blog

April 13, 2021
Nvidia Steps Up Enterprise and Automotive Efforts with GTC Announcements

By Bob O'Donnell

For those of you who haven’t been paying close attention, Nvidia is no longer a gaming company. Oh, sure, they continue to make some of the best-performing graphics chips for gaming and host a global cloud-based gaming service called GeForce Now. But if you take a step back and survey the full range of the company’s offerings, it’s significantly broader than it has ever been—a point CEO Jensen Huang hammered home during his GTC (GPU Technology Conference) keynote this week. In his words, “Nvidia is now a full-stack computing company.”

In fact, ironically, there was probably as much news about non-GPU chips as about GPUs at GTC, reflecting how much wider Nvidia’s scope has become over the past few years. Between a new 5 nm Arm-based CPU codenamed “Grace”, a broadening of the DPU line created from Nvidia’s Mellanox acquisition, new additions to its automotive-focused chips and platforms, and discussions of quantum computing, 5G data processing and more, the company is reaching into ever broader portions of the computing landscape. Not to be left out, of course, there were new GPU and GPU-related announcements as well, including an impressive array of new cloud-based, AI-powered GPU software and services.

To its credit, Nvidia has been broadening the range of applications for GPUs for several years now. Its impact on machine learning, deep neural networks and other sophisticated AI models has been well documented, and at this year’s show, the company continued to extend that reach. In particular, Nvidia highlighted its enterprise-focused efforts with a whole range of pre-built AI models that companies can more easily deploy across a wide variety of applications. The previously announced Jarvis conversational AI tool, for example, is now generally available and can be used by businesses to build automated customer service tools.

The newly unveiled Maxine project is designed to improve video quality over low-bandwidth connections and to perform automatic transcription and real-time translation—timely and practical capabilities that many collaboration tools already offer but that could likely be enhanced by integrating Nvidia’s cloud-based, AI-powered tools. The company also made an important announcement with VMware, noting that Nvidia’s AI tools and platforms can now run in virtualized VMware environments as well as on dedicated bare metal hardware. While seemingly mundane at first glance, this is actually a critically important development for the many businesses that run a good portion of their workloads on VMware.

As impressive as the GPU-focused applications for the enterprise are, however, the biggest news splash (as well as some of the biggest confusion) out of GTC came from the company’s strategic shift to three different types of chips: GPUs, DPUs and CPUs. In fact, CEO Huang neatly summarized the approach with a slide that showed the roadmaps for the three chip lines out through 2025, highlighting how each line will get updated every few years, but with different start points, allowing the company to deliver at least one (and sometimes two) major architectural advances every year.
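To make that cadence concrete, here is a quick illustrative sketch in Python. The launch years below are made-up assumptions, not anything Nvidia actually disclosed, but they show how staggered start points for three chip lines on multi-year refresh cycles can still yield at least one major architecture launch every calendar year:

# Illustrative only: hypothetical launch years, not Nvidia's actual roadmap.
# Each chip line refreshes every two years, but the lines start in different
# years, so every calendar year sees at least one launch.
roadmaps = {
    "GPU": set(range(2020, 2026, 2)),  # assumed even-year cadence
    "DPU": set(range(2021, 2026, 2)),  # assumed odd-year cadence
    "CPU": set(range(2023, 2026, 2)),  # assumed Grace-class cadence
}

for year in range(2020, 2026):
    launches = [chip for chip, years in roadmaps.items() if year in years]
    print(year, ", ".join(launches) if launches else "(no major launch)")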

As mentioned, the DPU line (codenamed BlueField) builds on the high-speed networking technology Nvidia acquired when it purchased Mellanox last April. Specifically targeted at data centers, HPC (high-performance computing), and cloud computing applications, the BlueField line of chips is ideally suited to speeding up the performance of modern web-based applications. Because these applications are split into numerous smaller containers that often run on multiple physical servers, they are highly dependent on what’s commonly called “east-west” traffic between computing racks in a data center. Importantly, however, these same software development principles are being used for an increasingly wide range of applications, including automotive, which helps explain why the latest-generation automotive SOC (codenamed Atlan and discussed below) includes a BlueField core in its design.
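For readers less familiar with the term, “east-west” traffic is the server-to-server chatter inside a data center, as opposed to the “north-south” traffic between clients and the data center. A rough Python sketch, with an entirely made-up service graph, shows how a single incoming request to a containerized application can fan out into many internal calls—exactly the kind of traffic a DPU is designed to offload and accelerate:

# Made-up microservice call graph, for illustration only.
service_calls = {
    "frontend": ["auth", "catalog", "cart"],
    "cart": ["pricing", "inventory"],
    "catalog": ["inventory"],
}

def count_east_west(service: str) -> int:
    # Total internal (east-west) calls triggered by one request to `service`.
    downstream = service_calls.get(service, [])
    return len(downstream) + sum(count_east_west(s) for s in downstream)

print("Internal calls per single front-end request:", count_east_west("frontend"))
# -> 6 internal hops for 1 external request in this toy example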

The new CPU line—which unquestionably generated the most buzz, as well as the most confusion—is an Arm-based design codenamed Grace (for computing pioneer Grace Hopper—a classy move on Nvidia’s part). Though many initial reports suggested it was a competitor to Intel and AMD x86 server CPUs, the truth is that Grace’s initial implementation is aimed solely at HPC and other enormous AI model-based workloads. It is not a general-purpose CPU design. Still, in the kinds of advanced, very demanding and memory-intensive AI applications that Nvidia is initially targeting with Grace, it solves the critical problem of connecting GPUs to system memory at significantly faster speeds than traditional x86 architectures can provide. Clearly, this isn’t a capability that every organization can take advantage of, but for the growing number of organizations that are building large AI models, it’s still very important.
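A bit of back-of-envelope arithmetic (my own illustration, not Nvidia’s numbers) helps show why memory is the pinch point for these workloads: simply holding the weights of a very large model quickly outgrows what fits in a single GPU’s onboard memory, so the speed of the link between the GPU and system memory starts to matter enormously.

# Rough illustration: memory needed just to store model weights in FP16
# (2 bytes per parameter). The model sizes are assumptions for illustration.
def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    return num_params * bytes_per_param / 1e9

for params in (1e9, 100e9, 1e12):  # 1 billion, 100 billion, 1 trillion parameters
    print(f"{params:.0e} parameters -> ~{weight_memory_gb(params):,.0f} GB of weights")

Since an individual GPU carries only tens of gigabytes of onboard memory, the largest models inevitably spill over into system memory, and that GPU-to-memory path is precisely what Grace is meant to speed up.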

Of course, part of the reason for the confusion is that Nvidia is currently trying to purchase Arm, so any connection between the two is bound to get inflated into a larger issue. Plus, Nvidia did demonstrate a range of different applications where it’s working to combine its IP with Arm-based products, including cloud computing with Amazon’s AWS Graviton, scientific computing in conjunction with Ampere’s general-purpose Altra server CPUs, 5G network infrastructure and edge computing with Marvell’s OCTEON, and PCs with MediaTek’s MT819x SOCs. In addition, alongside a next-generation BlueField DPU core, the new Atlan automotive SOC diagram incorporates a “Grace Next” CPU core, generating even more speculation.

Speaking of which, Nvidia also highlighted a number of automotive-related announcements at GTC. The company’s next-generation platform is codenamed Orin and is expected to show up in vehicles from big players like Mercedes-Benz, Volvo, Hyundai and Audi starting next year. The company also announced the Orin central computer, in which a single chip can be virtualized to run four different applications, including the instrument cluster, infotainment system, passenger interaction and monitoring, and autonomous and assisted driving features with confidence view—a visual display of what the car’s computers are seeing, designed to give passengers confidence that everything is functioning properly. Nvidia also debuted its eighth-generation Hyperion autonomous vehicle (AV) platform, which incorporates multiple Orin chips, image sensors, radar, lidar, and the company’s latest AV software.

The big news, however, once again was a new chip—the aforementioned Atlan—scheduled to arrive in 2025. While many may find a multi-year pre-announcement to be overkill, it’s relatively standard practice in the automotive industry, where manufacturers typically work on cars three years in advance of their introduction. Atlan is intriguing on many levels, not the least of which is the fact that it’s expected to quadruple the computing power of Orin (which comes out in 2022) and reach a rate of 1,000 TOPS (tera operations per second). As hinted at earlier, Atlan is also the first Nvidia product to include all three of the company’s core chip architectures—GPU, DPU and CPU—in a single semiconductor design. The details remain vague, but the next generation of each of the current architectures is expected to be part of Atlan, potentially making it a poster child for the company’s expanded opportunities, as well as a great example of the technical sophistication of the automobiles that will be released in that era. Either way, Atlan will definitely be something to watch.
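For what it’s worth, the stated figures imply where Orin itself lands; this is nothing more than simple arithmetic on the numbers quoted above:

# Arithmetic check on the publicly quoted figures.
atlan_tops = 1000            # stated target for Atlan (2025)
orin_multiplier = 4          # Atlan is expected to quadruple Orin
orin_tops = atlan_tops / orin_multiplier
print(f"Implied Orin performance: ~{orin_tops:.0f} TOPS")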

All told, Nvidia presented a very wide-ranging and impressive story at GTC. In fact, I haven’t even mentioned, until now, its Omniverse platform for 3D collaboration, its DGX supercomputer-like hardware platforms, and a host of other important announcements the company made. There was simply too much to cover in a single column. However, this much is apparent: Nvidia is clearly focused on a much broader range of opportunities than it ever has been before. So, even though its gaming fans may have been disappointed by the lack of news for them, anyone thinking about the future of computing can’t help but be impressed by the breadth and depth of what Nvidia unveiled. Gaming and GPUs, it seems, are just the start.

Here’s a link to the original column: https://www.techspot.com/news/89280-if-you-havent-paying-attention-nvidia-no-longer.html

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.
